Fine-grained Controllable Text Generation through In-context Learning with Feedback

Sarubi Thillainathan, Alexander Koller

arXiv.org Artificial Intelligence

We present a method for rewriting an input sentence to match specific values of nontrivial linguistic features, such as dependency depth. In contrast to earlier work, our method uses in-context learning rather than fine-tuning, making it applicable in use cases where data is sparse. We show that our model performs accurate rewrites and matches the state of the art on rewriting sentences to a specified school grade level.
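As a minimal illustration of one of the linguistic features mentioned above, the sketch below computes the depth of a dependency tree, i.e. the maximum number of arcs from the root to any token. The parse is assumed to be given as a list of head indices (a hypothetical, hand-specified encoding, not the paper's actual representation); in practice one would obtain it from a dependency parser.

```python
def dependency_depth(heads):
    """Depth of a dependency tree given as a list of head indices.

    heads[i] is the index of token i's head; the root token has head -1.
    Returns the maximum number of arcs from the root to any token.
    """
    def depth(i):
        d = 0
        while heads[i] != -1:  # walk up to the root, counting arcs
            i = heads[i]
            d += 1
        return d

    return max(depth(i) for i in range(len(heads)))


# Toy parse of "She saw the cat": "saw" is the root,
# "She" and "cat" attach to "saw", "the" attaches to "cat".
heads = [1, -1, 3, 1]
print(dependency_depth(heads))  # -> 2
```

A controllable rewriting system as described in the abstract would target a specific value of such a feature (e.g. "rewrite this sentence so its dependency depth is 2") and verify the output with a measurement like this one.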